Empirical Comparison of Probabilistic and Possibilistic Markov Decision Processes Algorithms
Author
Abstract
Classical stochastic Markov Decision Processes (MDPs) and possibilistic MDPs (π-MDPs) aim at solving the same kind of problems, involving sequential decision making under uncertainty. The underlying uncertainty model (probabilistic / possibilistic) and preference model (reward / satisfaction degree) differ, but the algorithms, based on dynamic programming, are similar. A question may thus be raised about when to prefer one model to the other, and for which reasons. The answer may seem obvious when the uncertainty is of an objective nature (symmetry of the problem, frequentist information) and when the problem is faced repeatedly and rewards accumulate. It is less clear when uncertainty and preferences are qualitative and purely subjective, and when the problem is faced only once. In this paper we carry out an empirical comparison of both types of algorithms (stochastic and possibilistic), in terms of the "quality" of the solutions and the time needed to compute them.
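To make the closeness of the two dynamic-programming schemes concrete, here is a minimal, hypothetical sketch (not taken from the paper): one backup step for a stochastic MDP next to the optimistic qualitative backup commonly used for π-MDPs, in which sum/product are replaced by max/min. The function names, the toy model, and the satisfaction degrees are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): one dynamic-programming backup
# under each uncertainty model. model[s][a] = {s_next: weight}, where the
# weights are probabilities (stochastic case) or possibility degrees in
# [0, 1] with maximum 1 (possibilistic case).

def stochastic_backup(V, model, reward, gamma=0.9):
    """Bellman backup: V(s) = max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]."""
    return {
        s: max(
            reward[s][a] + gamma * sum(p * V[s2] for s2, p in trans.items())
            for a, trans in actions.items()
        )
        for s, actions in model.items()
    }

def possibilistic_backup(U, model):
    """Optimistic qualitative backup: U(s) = max_a max_s' min(pi(s'|s,a), U(s')).
    Expectation is replaced by max/min; U starts at the satisfaction degrees."""
    return {
        s: max(
            max(min(pi, U[s2]) for s2, pi in trans.items())
            for trans in actions.values()
        )
        for s, actions in model.items()
    }

# Toy 2-state problem: reach the satisfying state B from state A.
model_p = {"A": {"go": {"B": 1.0}, "stay": {"A": 1.0}},
           "B": {"stay": {"B": 1.0}}}
reward = {"A": {"go": 0.0, "stay": 0.0}, "B": {"stay": 1.0}}
model_pi = {"A": {"go": {"B": 1.0, "A": 0.3}, "stay": {"A": 1.0}},
            "B": {"stay": {"B": 1.0}}}
mu = {"A": 0.0, "B": 1.0}  # satisfaction degrees of the states

V1 = stochastic_backup({"A": 0.0, "B": 0.0}, model_p, reward)
U1 = possibilistic_backup(mu, model_pi)
```

Note that the possibilistic backup only compares values with max and min, so it remains well defined when uncertainty and preferences are measured on a purely ordinal, qualitative scale.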
Similar articles
Anytime Algorithms for Solving Possibilistic MDPs and Hybrid MDPs
The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs make it possible to model imprecise beliefs, yet they cannot accurately represent probabilistic sour...
Accelerated decomposition techniques for large discounted Markov decision processes
Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into some levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorith...
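The decomposition idea described in this snippet (partitioning the state space into strongly connected components and solving restricted MDPs level by level) can be sketched briefly. The following is a hypothetical illustration, not the paper's algorithm: it computes the SCCs of a transition graph with Kosaraju's algorithm, returning them in topological order of the condensation, so that the restricted MDPs can then be solved starting from the downstream components.

```python
# Hypothetical sketch of SCC decomposition for a transition graph.
# graph: {state: [successor states]}; every state appears as a key.

def _dfs(g, v, seen, out):
    """Iterative depth-first search appending vertices in postorder."""
    stack = [(v, iter(g.get(v, [])))]
    seen.add(v)
    while stack:
        node, it = stack[-1]
        nxt = next((w for w in it if w not in seen), None)
        if nxt is None:
            out.append(stack.pop()[0])
        else:
            seen.add(nxt)
            stack.append((nxt, iter(g.get(nxt, []))))

def sccs(graph):
    """Kosaraju's algorithm: SCCs in topological order of the condensation
    (upstream components first)."""
    order, seen = [], set()
    for v in graph:
        if v not in seen:
            _dfs(graph, v, seen, order)
    rev = {v: [] for v in graph}          # reversed graph
    for v, ws in graph.items():
        for w in ws:
            rev.setdefault(w, []).append(v)
    comps, seen2 = [], set()
    for v in reversed(order):             # decreasing finish time
        if v not in seen2:
            comp = []
            _dfs(rev, v, seen2, comp)
            comps.append(comp)
    return comps

# Two components: {a, b} feeds into the absorbing component {c, d}.
comps = sccs({"a": ["b"], "b": ["a", "c"], "c": ["d"], "d": ["c"]})
```

Solving the restricted MDPs in `reversed(comps)` order (sink components first) ensures that each component's exit values are already known when it is processed.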
Journal:
Volume / Issue:
Pages: -
Publication year: 2000